Providing a Structured Method for Integrating Non-Speech Audio into Human-Computer Interfaces
Author
Abstract
This thesis provides a framework for integrating non-speech sound into human-computer interfaces. Previously there was no structured way of doing this: it was done in an ad hoc manner by individual designers, which led to ineffective uses of sound. In order to add sounds that improve usability, two questions must be answered: What sounds should be used, and where is it best to use them? With these answers a structured method for adding sound can be created. An investigation of earcons as a means of presenting information in sound was undertaken. A series of detailed experiments showed that earcons were effective, especially if musical timbres were used. Parallel earcons (where two earcons are played simultaneously) were also investigated, and an experiment showed that they could increase sound presentation rates. From these results, guidelines were drawn up for designers to use when creating usable earcons. These formed the first half of the structured method for integrating sound into interfaces. An informal analysis technique was designed to investigate interactions, identify situations where hidden information existed, and determine where non-speech sound could be used to overcome the associated problems. Interactions were considered in terms of events, status and modes to find hidden information. This information was then categorised in terms of the feedback needed to present it. Several examples of the use of the technique were presented. This technique formed the second half of the structured method. The structured method was evaluated by testing sonically-enhanced scrollbars, buttons and windows. Experimental results showed that sound could improve usability by increasing performance, reducing the time to recover from errors and reducing workload. There was also no increased annoyance due to the sound. Thus the structured method for integrating sound into interfaces was shown to be effective when applied to existing interface widgets.
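The abstract describes earcons as structured, musically-timbred sound messages, and parallel earcons as two such messages played at once. The following is a minimal sketch of that idea as a data structure; the names (`Earcon`, `timbre`, `parallel`) and the note palette are illustrative assumptions, not the thesis's actual notation.

```python
# Minimal sketch: an earcon as a timbre plus a short note motif,
# and parallel playback as pairing two earcons at the same start time.
# All names here are hypothetical, chosen only for illustration.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass(frozen=True)
class Earcon:
    timbre: str                           # e.g. a musical instrument name
    notes: Tuple[Tuple[str, float], ...]  # (pitch, duration-in-beats) pairs

def parallel(a: Earcon, b: Earcon) -> List[Earcon]:
    """Return two earcons to be played simultaneously (a mixer would
    start both at time 0; here we simply pair them up)."""
    return [a, b]

# Two earcons with distinct musical timbres, which the experiments
# reported as more effective than simple tones.
file_earcon = Earcon("marimba", (("C4", 1.0), ("E4", 0.5)))
error_earcon = Earcon("brass", (("G3", 0.25), ("G3", 0.25)))

chord = parallel(file_earcon, error_earcon)
```

Distinct timbres matter here: the experiments found that listeners separate simultaneous earcons far more easily when each uses a different instrument.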
Similar references
The Application of a Method for Integrating Non-speech Audio into Human-computer Interfaces
This paper describes the application of a structured method for integrating non-speech sound into graphical interfaces. The method analyses interactions in terms of event, status and mode information. It then categorises this information in terms of the feedback needed to present it. This is then combined with guidelines for creating sounds to generate the auditory feedback required. As an exam...
A New Algorithm for Voice Activity Detection Based on Wavelet Packets (RESEARCH NOTE)
Speech constitutes much of the communicated information; most other perceived audio signals do not carry nearly as much information. Indeed, much of the non-speech signal may be classified as ‘noise’ in human communication. The process of separating conversational speech from noise is termed voice activity detection (VAD). This paper describes a new approach to VAD which is based on the Wavelet ...
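The snippet above frames VAD as classifying audio into speech and non-speech regions. A minimal sketch of that framing follows, using a simple short-time-energy threshold rather than the wavelet-packet features the cited paper proposes; the function name, frame length and threshold are all illustrative assumptions.

```python
# Hedged sketch of the VAD framing: split the signal into frames and
# classify each frame as speech or non-speech. This uses short-time
# energy with a fixed threshold, NOT the paper's wavelet-packet method.
import numpy as np

def vad(signal: np.ndarray, frame_len: int = 160,
        threshold: float = 0.01) -> np.ndarray:
    """Return a boolean array: True where a frame is judged speech."""
    n_frames = len(signal) // frame_len
    frames = signal[: n_frames * frame_len].reshape(n_frames, frame_len)
    energy = np.mean(frames ** 2, axis=1)  # short-time energy per frame
    return energy > threshold

# One second of silence followed by one second of a 440 Hz tone:
# only the tone frames should be flagged as active.
fs = 8000
t = np.arange(fs) / fs
sig = np.concatenate([np.zeros(fs), 0.5 * np.sin(2 * np.pi * 440 * t)])
flags = vad(sig)
```

A real detector would replace the raw energy feature with something more noise-robust (the cited paper uses wavelet-packet subband features), but the frame-then-classify structure stays the same.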
A Distributed System for Device Diagnostics Utilizing Augmented Reality, 3D Audio, and Speech Recognition
Over the past years, technology developed for Virtual Reality (VR) has been established as a valuable tool for providing an intuitive human-computer interface. This technology is now being integrated with the real-world environment in Augmented Reality (AR). The Rockwell Science Center (RSC) is developing and integrating components for a system using AR techniques for visualization and a...
UsiGesture: a structured method for engineering pen-based gestures in graphical user interfaces
UsiGesture aims to contribute to the field of Engineering of Interactive Systems by supporting the work of engineers, programmers and designers during the elaboration of graphical user interfaces that integrate pen-based gesture recognition on 2D surfaces. It proposes methodological guidance for incorporating pen-based gestures into graphical user interfaces through a structured ...
Representing Complex Hierarchies with Earcons
This paper describes an experiment to discover if structured audio messages called earcons could provide navigational cues in a complex menu hierarchy. A hierarchy of 25 nodes and four levels was created with an earcon for each node. Rules were designed for the creation of the earcon at each node. The results showed that participants could recall over 80% of the earcons they heard, indicating t...
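The snippet above describes earcons built by rule for every node of a menu hierarchy, so that an earcon conveys the node's position. A minimal sketch of one such rule follows: each node inherits its parent's motif and adds one distinguishing note. The function name, pitch palette and assignment rule are illustrative assumptions, not the paper's actual rules.

```python
# Hedged sketch: hierarchical earcons where each node's motif is its
# parent's motif plus one note, so hearing an earcon reveals the path
# from the root. The five-pitch palette is a hypothetical choice.
def node_earcon(path):
    """Map a node's path, e.g. (1, 3), to a motif with one note per level."""
    palette = ["C4", "D4", "E4", "F4", "G4"]  # one pitch per sibling index
    return tuple(palette[i] for i in path)

root = node_earcon(())            # empty motif at the root
child = node_earcon((1,))         # a level-1 node
grandchild = node_earcon((1, 3))  # its child: parent's motif + one note
```

Because every earcon begins with its parent's motif, a listener who has learned the rule can infer a node's ancestors from the notes alone, which is what made the earcons usable as navigational cues.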
Publication date: 1994